

Search for: All records

Creators/Authors contains: "Huang, Xiao"


  1. Few-shot graph classification aims to assign class labels to graphs when only a limited number of labeled graphs is available for each class. To address the challenge posed by label scarcity, recent studies have adopted the prevalent few-shot learning framework to achieve fast adaptation to graph classes with limited labeled graphs. In particular, these studies typically accumulate meta-knowledge across a large number of meta-training tasks and then generalize that meta-knowledge to meta-test tasks sampled from a disjoint class set. Nevertheless, existing studies generally ignore the crucial correlations among meta-training tasks and treat the tasks independently. In fact, such task correlations can promote model generalization to meta-test tasks and yield better classification performance. On the other hand, capturing and utilizing task correlations remains challenging due to the complex components and interactions within meta-training tasks. To deal with this, we propose FAITH, a novel few-shot graph classification framework that captures task correlations by learning a hierarchical task structure at different granularities. We further propose a task-specific classifier that incorporates the learned task correlations into the few-shot graph classification process. Moreover, we derive FAITH+, a variant of FAITH that improves the sampling process for the hierarchical task structure. Extensive experiments on four prevalent graph datasets demonstrate the superiority of FAITH and FAITH+ over other state-of-the-art baselines.

     
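     The meta-learning setup described in this entry trains over many small classification tasks drawn from disjoint class sets. Below is a minimal, generic sketch of N-way K-shot episode construction for graph classification; the function and variable names (sample_episode, graphs_by_class, n_way, k_shot) are illustrative assumptions and not taken from the FAITH implementation.

        import random

        def sample_episode(graphs_by_class, n_way=3, k_shot=5, n_query=10):
            """Sample one N-way K-shot meta-training task (episode).

            graphs_by_class: dict mapping class label -> list of graph objects.
            Returns support and query sets as lists of (graph, episode_label) pairs.
            """
            classes = random.sample(list(graphs_by_class), n_way)
            support, query = [], []
            for episode_label, cls in enumerate(classes):
                picked = random.sample(graphs_by_class[cls], k_shot + n_query)
                support += [(g, episode_label) for g in picked[:k_shot]]
                query += [(g, episode_label) for g in picked[k_shot:]]
            return support, query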
  2. Abstract

    Heart failure (HF) remains a global public health burden and often develops following myocardial infarction (MI). Following injury, cardiac fibrosis forms in the myocardium, which greatly hinders cellular function, survival, and recruitment and thus severely limits tissue regeneration. Here, we leverage biophysical microstructural cues made of hyaluronic acid (HA) loaded with the anti-fibrotic proteoglycan decorin to more robustly attenuate cardiac fibrosis after acute myocardial injury. Microrods showed decorin incorporation throughout the entirety of the hydrogel structures and exhibited first-order release kinetics in vitro. Intramyocardial injections of saline (n = 5), microrods (n = 7), decorin microrods (n = 10), and free decorin (n = 4) were performed in male rat models of ischemia-reperfusion MI to evaluate therapeutic effects on cardiac remodeling and function. Echocardiographic analysis demonstrated that rats treated with decorin microrods (5.21% ± 4.29%) exhibited a significantly increased change in ejection fraction (EF) at 8 weeks post-MI compared to rats treated with saline (−4.18% ± 2.78%, p < 0.001) and free decorin (−3.42% ± 1.86%, p < 0.01). Trends toward reduced end-diastolic volume were also identified in decorin microrod-treated groups compared to those treated with saline, microrods, and free decorin, indicating favorable ventricular remodeling. Quantitative analysis of histology and immunofluorescence staining showed that treatment with decorin microrods reduced cardiac fibrosis (p < 0.05) and cardiomyocyte hypertrophy (p < 0.05) at 8 weeks post-MI compared to the saline control. Together, this work contributes knowledge to guide the rational design of biomaterials for treating cardiovascular diseases.

     
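     The first-order release kinetics noted above correspond to cumulative release that approaches a plateau exponentially, M(t) = M_inf (1 − e^(−k t)). Below is a minimal sketch of fitting that model with SciPy; the time points and release values are hypothetical placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order_release(t, m_inf, k):
            """Cumulative release at time t: M(t) = m_inf * (1 - exp(-k * t))."""
            return m_inf * (1.0 - np.exp(-k * t))

        # Hypothetical time points (days) and cumulative decorin release (percent).
        t = np.array([0.5, 1, 2, 4, 7, 14])
        released = np.array([18, 32, 52, 73, 88, 96])

        (m_inf, k), _ = curve_fit(first_order_release, t, released, p0=(100.0, 0.3))
        print(f"Estimated release plateau: {m_inf:.1f}%, rate constant: {k:.2f} per day")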
  3. Hurricane Harvey in 2017 marked an important transition in which many disaster victims used social media, rather than the overloaded 911 system, to seek rescue. This article presents a machine-learning-based detector of rescue requests in Harvey-related Twitter messages that differentiates itself from existing detectors by accounting for the potential impacts of ZIP codes on both the preparation of training samples and the performance of different machine learning models. We investigate how the outcomes of our ZIP code filtering differ from those of a recent, comparable study in terms of generating training data for machine learning models. We then conduct experiments to test how the presence of ZIP codes affects the performance of machine learning models by simulating different percentages of ZIP-code-tagged positive samples. The findings show that (1) all machine learning classifiers except K-nearest neighbors and Naïve Bayes achieve state-of-the-art performance in detecting rescue requests from social media; (2) ZIP code filtering can increase the effectiveness of gathering rescue requests for training machine learning models; and (3) machine learning models are better able to identify rescue requests that are associated with ZIP codes. We therefore encourage every rescue-seeking victim to include a ZIP code when posting messages on social media. This study is a useful addition to the literature and can help first responders rescue disaster victims more efficiently.
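     A minimal sketch of the two ingredients described in this entry: ZIP-code filtering of candidate tweets and a baseline text classifier. The regular expression, features, and model below are illustrative assumptions rather than the study's exact pipeline; note that a bare five-digit pattern can also match non-ZIP numbers.

        import re
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        ZIP_RE = re.compile(r"\b\d{5}(?:-\d{4})?\b")  # 5-digit or ZIP+4 patterns

        def has_zip_code(text):
            """Return True if the tweet text contains a US ZIP code-like pattern."""
            return bool(ZIP_RE.search(text))

        def train_detector(tweets, labels):
            """Fit a TF-IDF + logistic regression rescue-request detector.

            tweets: list of raw tweet texts; labels: 1 = rescue request, 0 = other.
            """
            model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                                  LogisticRegression(max_iter=1000))
            model.fit(tweets, labels)
            return model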
  4. Few-shot graph classification aims to predict classes for graphs given only limited labeled graphs for each class. To tackle the bottleneck of label scarcity, recent works incorporate few-shot learning frameworks for fast adaptation to graph classes with limited labeled graphs. Specifically, these works accumulate meta-knowledge across diverse meta-training tasks and then generalize such meta-knowledge to the target task with a disjoint label set. However, existing methods generally ignore task correlations among meta-training tasks and treat them independently, even though such correlations can advance model generalization to the target task and yield better classification performance. On the other hand, it remains non-trivial to utilize task correlations due to the complex components in a large number of meta-training tasks. To deal with this, we propose FAITH, a novel few-shot learning framework that captures task correlations by constructing a hierarchical task graph at different granularities. We further design a loss-based sampling strategy to select tasks with more correlated classes. Moreover, a task-specific classifier is proposed to utilize the learned task correlations for few-shot classification. Extensive experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
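     The loss-based sampling strategy mentioned in this entry favors tasks that are currently harder for the model. Below is a minimal, generic sketch of loss-proportional task sampling; this is an illustrative interpretation rather than FAITH's exact procedure, and the names (sample_task_by_loss, recent_loss, temperature) are assumptions.

        import random

        def sample_task_by_loss(candidate_tasks, recent_loss, temperature=1.0):
            """Pick one task, with probability increasing in its recent loss.

            candidate_tasks: list of task identifiers.
            recent_loss: dict mapping task id -> most recent loss for that task.
            """
            weights = [max(recent_loss.get(t, 1.0), 1e-8) ** (1.0 / temperature)
                       for t in candidate_tasks]
            return random.choices(candidate_tasks, weights=weights, k=1)[0]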
  5. Attributed networks are a type of graph-structured data used in many real-world scenarios. Detecting anomalies on attributed networks has a wide spectrum of applications, such as spammer detection and fraud detection. Although this research area has drawn increasing attention in the last few years, previous works are mostly unsupervised because of the expensive cost of labeling ground-truth anomalies. Many recent studies have shown that different types of anomalies are often mixed together on attributed networks, and human knowledge of these anomaly types could provide complementary insights for advancing anomaly detection on attributed networks. To this end, we study the novel problem of modeling and integrating human knowledge of different anomaly types for attributed network anomaly detection. Specifically, we first model prior human knowledge through a novel data augmentation strategy. We then integrate the modeled knowledge into a Siamese graph neural network encoder through a well-designed contrastive loss. In the end, we train a decoder to reconstruct the original networks from the node representations learned by the encoder and rank nodes according to their reconstruction errors as the anomaly metric. Experiments on five real-world datasets demonstrate that the proposed framework outperforms state-of-the-art anomaly detection algorithms.
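     The final ranking step in this entry scores each node by how poorly the decoder can reconstruct it. Below is a minimal sketch of reconstruction-error ranking over an attribute matrix; the encoder and decoder that would produce x_hat stand in for the paper's Siamese graph neural network components and are not shown here.

        import numpy as np

        def rank_by_reconstruction_error(x, x_hat):
            """Rank nodes by per-node reconstruction error (higher = more anomalous).

            x:     (num_nodes, num_features) original node attribute matrix.
            x_hat: (num_nodes, num_features) attributes reconstructed by the decoder.
            Returns (node indices sorted from most to least anomalous, error values).
            """
            errors = np.linalg.norm(x - x_hat, axis=1)
            return np.argsort(-errors), errors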
  6. Human mobility studies have become increasingly important and diverse in the past decade, supported by social media big data that enable human mobility to be measured in a harmonized and rapid manner. However, what is less explored in the current scholarship is episodic mobility, a special type of human mobility defined as abnormal mobility triggered by episodic events and exceeding the normal range of mobility at large. Drawing on a large-scale systematic collection of 1.9 billion geotagged Twitter records from 2017 to 2020, this study contributes the first empirical study of episodic mobility by producing a daily Twitter census of visitors at the U.S. county level and proposing multiple statistical approaches to identify and quantify episodic mobility. This is followed by four nationwide U.S. case studies that showcase the great potential of Twitter data and the proposed methods to detect episodic mobility arising from events that occur both regularly and sporadically. This study provides new conceptual, methodological, and empirical insights into episodic mobility, enriching the current mobility research paradigm.
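     One simple statistical approach in the spirit of this entry is to flag days whose visitor counts deviate strongly from a county's recent baseline. Below is a minimal sketch using a rolling z-score on a daily visitor series; the window length and threshold are illustrative assumptions, not the study's actual parameters.

        import pandas as pd

        def flag_episodic_days(daily_visitors, window=28, z_threshold=3.0):
            """Flag days with abnormally high visitor counts for one county.

            daily_visitors: pandas Series indexed by date with daily visitor counts.
            Returns a boolean Series marking candidate episodic-mobility days.
            """
            baseline = daily_visitors.rolling(window, min_periods=window // 2)
            z = (daily_visitors - baseline.mean()) / baseline.std()
            return z > z_threshold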
  7. Knowledge graphs (KGs) are of great importance in various artificial intelligence systems, such as question answering, relation extraction, and recommendation. Nevertheless, most real-world KGs are highly incomplete, with many missing relations between entities. To discover new triples (i.e., head entity, relation, tail entity), many KG completion algorithms have been proposed in recent years. However, the vast majority of existing studies require a large number of training triples for each relation, which contradicts the fact that the frequency distribution of relations in KGs often follows a long-tail distribution, meaning that most relations have only very few triples. Meanwhile, since most existing large-scale KGs are constructed automatically by extracting information from crowd-sourced data using heuristic algorithms, many errors are inevitably introduced due to the lack of human verification, which greatly reduces KG completion performance. To tackle these issues, in this paper we study the novel problem of error-aware few-shot KG completion and present a principled KG completion framework, REFORM. Specifically, we formulate the problem under the few-shot learning framework, with the goal of accumulating meta-knowledge across different meta-tasks and generalizing the accumulated knowledge to the meta-test task for error-aware few-shot KG completion. To address the associated challenges resulting from insufficient training samples and inevitable errors, we propose three essential modules in each meta-task: a neighbor encoder, cross-relation aggregation, and error mitigation. Extensive experiments on three widely used KG datasets demonstrate the superiority of the proposed framework REFORM over competitive baseline methods.
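     A minimal, generic sketch of a neighbor encoder of the kind named in this entry: an entity is represented by aggregating embeddings of its (relation, neighbor) pairs. The mean-pooling and concatenation choices are illustrative assumptions, not REFORM's exact module.

        import numpy as np

        def encode_entity(entity, neighbors, rel_emb, ent_emb, dim):
            """Encode an entity by mean-pooling its (relation, neighbor) pair features.

            neighbors: dict mapping entity -> list of (relation, neighbor_entity) pairs.
            rel_emb / ent_emb: dicts mapping names to length-dim embedding vectors.
            Returns a length-2*dim vector (relation part concatenated with entity part).
            """
            pairs = neighbors.get(entity, [])
            if not pairs:
                return np.zeros(2 * dim)
            feats = [np.concatenate([rel_emb[r], ent_emb[e]]) for r, e in pairs]
            return np.mean(feats, axis=0)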
  8. Shaped by human movement, place connectivity is quantified by the strength of spatial interactions among locations. For decades, spatial scientists have researched place connectivity, its applications, and its metrics. The growing popularity of social media provides a new data stream in which spatial social interaction measures are largely devoid of privacy issues, easily accessible, and harmonized. In this study, we introduce a global multi-scale place connectivity index (PCI) based on spatial interactions among places revealed by geotagged tweets, as a spatiotemporally continuous and easy-to-implement measurement. The multi-scale PCI, demonstrated at the US county level, exhibits a strong positive association with SafeGraph population movement records (10% penetration of the US population) and Facebook's social connectedness index (SCI), a popular connectivity index based on social networks. We found that PCI has a strong boundary effect and generally follows distance decay, although this effect is weaker in more urbanized counties with denser populations. Our investigation further suggests that PCI has great potential for addressing real-world problems that require place connectivity knowledge, exemplified by two applications: (1) modeling the spatial spread of COVID-19 during the early stage of the pandemic and (2) modeling hurricane evacuation destination choice. The methodological and contextual knowledge of PCI, together with the open-sourced PCI datasets at various geographic levels, is expected to support research fields requiring knowledge of human spatial interactions.
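     A minimal sketch of one plausible way to derive a pairwise place-connectivity measure from geotagged posts: count users observed in both places and normalize by the geometric mean of each place's user counts. The normalization and names below are illustrative assumptions; the paper's PCI definition may differ in its details.

        from collections import defaultdict
        from itertools import combinations
        from math import sqrt

        def place_connectivity(visits):
            """Compute a normalized co-visitation index between place pairs.

            visits: iterable of (user_id, place_id) observations, e.g., from geotagged tweets.
            Returns a dict {(place_a, place_b): connectivity score in [0, 1]}.
            """
            users_by_place = defaultdict(set)
            for user, place in visits:
                users_by_place[place].add(user)

            index = {}
            for a, b in combinations(sorted(users_by_place), 2):
                shared = len(users_by_place[a] & users_by_place[b])
                denom = sqrt(len(users_by_place[a]) * len(users_by_place[b]))
                index[(a, b)] = shared / denom if denom else 0.0
            return index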